Missed
  • Utilisation Law
  • Little's Law (Parameters are averages)
  • Response Time Law:
    • Just a variation on Little's Law
  • The Forced Flow Law
Let the fun begin
  • The Service Demand/Bottleneck Laws:
    • Service time Sk at a node is per visit; service demand Dk is the total over all Vk visits
    • Everything is averages
    • Once you know service demand, you can calculate a lot of other things
    • Service demand is Dk = Vk Sk
  • The node with the largest demand dictates the throughput (since it is the bottleneck)
  • Throughput bound: X ≤ min(1/Dmax, N/(D + Z)), where Dmax is the bottleneck (largest) demand, D = ΣDk is the total demand, Z is the think time and N is the population (see the sketch after this list)
  • D (the total demand) is the minimum possible response time, so R is always greater than or equal to D
  • The rest of the course will focus on modelling the throughput and performance graphs
  • Utilisation of a resource is the proportion of the time the resource is busy (doing useful work):
    • Which is also the probability that the resource is busy
    • Hence, 1 - U is the probability that the resource is idle!
  • What goes in must come out
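  • A worked sketch in Python of the bottleneck calculation (all numbers below are hypothetical, just to show the arithmetic):

    # Hypothetical 3-node system: visit counts Vk and per-visit service times Sk (seconds)
    V = {"cpu": 10, "disk1": 6, "disk2": 3}
    S = {"cpu": 0.02, "disk1": 0.05, "disk2": 0.04}

    D = {k: V[k] * S[k] for k in V}    # service demands Dk = Vk * Sk
    D_total = sum(D.values())          # D = sum of the demands
    D_max = max(D.values())            # bottleneck demand

    N, Z = 20, 5.0                     # population and think time (also hypothetical)

    X_bound = min(1.0 / D_max, N / (D_total + Z))    # asymptotic throughput bound
    R_min = D_total                                  # response time can never drop below D

    print(D, X_bound, R_min)
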
Measuring and modelling
  • Monte Carlo: aggregate the results of a series of individual experiments driven by random numbers
  • Discrete Time -- perform an n-step random traversal of a state transition system (see the sketch after this list):
    • In this course "discrete" means steps
  • Discrete Event -- as above, but event transitions are triggered by events occurring at discrete points in continuous time
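  • A minimal discrete-time sketch in Python: an n-step random traversal of a small, made-up state transition system, aggregating over the steps:

    import random

    # Hypothetical 3-state transition system: state -> [(next_state, probability), ...]
    transitions = {
        "idle":   [("busy", 0.6), ("idle", 0.4)],
        "busy":   [("idle", 0.5), ("busy", 0.3), ("broken", 0.2)],
        "broken": [("idle", 1.0)],
    }

    def step(state):
        """Pick the next state according to the transition probabilities."""
        u, acc = random.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if u < acc:
                return nxt
        return transitions[state][-1][0]

    def traverse(start, n):
        """Perform an n-step random traversal, counting how often each state is visited."""
        counts, state = {}, start
        for _ in range(n):
            counts[state] = counts.get(state, 0) + 1
            state = step(state)
        return counts

    print(traverse("idle", 10_000))
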
Revise stats
  • The core idea is to wait until the system stabilises
  • Ensure that the system is not affected by initialisation bias when the measurements are performed
  • Modelling the passage of time
  • Read through some proofs and revise basic stats:
    • Confidence intervals
    • T-distribution
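  • A small sketch of the confidence-interval calculation (the sample values are made up; assumes scipy is available for the t quantile):

    import math
    from statistics import mean, stdev
    from scipy.stats import t

    # Hypothetical measurements from independent simulation runs
    samples = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 10.1, 10.4]

    n = len(samples)
    m, s = mean(samples), stdev(samples)                   # sample mean and standard deviation
    half_width = t.ppf(0.975, n - 1) * s / math.sqrt(n)    # 95% CI uses the t-distribution with n - 1 dof

    print(f"{m:.2f} +/- {half_width:.2f}")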
A random sample path through a state transition system

◮ There is a single global “clock” – a virtual time
◮ Transitions are triggered by events which are placed, a.k.a “scheduled”, in time order on a virtual time line

◮ When an event occurs (“fires”, “is triggered”, ...) at virtual time t, say, it:

  ◮ Updates the global clock to t
  ◮ Changes the model state
  ◮ Schedules zero or more new future events on the time line

◮ This implements an asynchronous model of time, cf. a synchronous model with fixed time steps (∆t)

◮ The global clock is a floating-point number, now, say
◮ The state is a set of discrete program variables, e.g. integers, booleans, arrays thereof etc.
  i.e. time is continuous, but the state space is discrete

◮ The time line is essentially an event diary implemented as a priority queue of (Event, time) pairs, ordered by time

◮ Events are implemented as objects, functions, procedures, methods etc.
◮ A scheduler adds new (Event, time) pairs to the diary
◮ A descheduler similarly removes them from the diary
◮ Measurement code will need to be added (otherwise what is the point of simulating?)
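
A minimal discrete-event skeleton along these lines (the single "arrival" event and its rate are hypothetical; heapq plays the role of the event diary):

  import heapq, random

  def simulate(until=100.0, rate=1.0):
      now = 0.0                  # global clock (continuous virtual time)
      arrivals = 0               # discrete model state, doubling as the measurement
      diary = []                 # priority queue of (time, event) pairs

      heapq.heappush(diary, (random.expovariate(rate), "arrival"))   # schedule the first event

      while diary:
          t, event = heapq.heappop(diary)    # next event in time order
          if t > until:
              break
          now = t                            # 1. advance the global clock
          if event == "arrival":
              arrivals += 1                  # 2. change the model state / take a measurement
              # 3. schedule zero or more future events
              heapq.heappush(diary, (now + random.expovariate(rate), "arrival"))

      return arrivals / until                # measured arrival throughput

  print(simulate())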

For Markov Chains it is important to remember that an event scheduled at an earlier stage is inherited at later stages.

◮ “Random” arrival processes are ubiquitous in the real world and are often extremely well approximated by so-called Poisson processes

◮ The distribution is "memoryless": the probability of an event occurring next is the same no matter how much time has passed since the start
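  A quick check, using the exponential tail P(T > t) = e^(−λt):
    P(T > s + t | T > s) = e^(−λ(s+t)) / e^(−λs) = e^(−λt) = P(T > t)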

◮ The 1-p trick has been mentioned enough to be in the exam

◮ A Poisson arrival process is an arrival “stream” where the inter-arrival times, T, are independent and exponentially-distributed: P(T ≤ t) = 1 − e^(−λt)

◮ λ is often called the process’s “rate” parameter and is the reciprocal of the average inter-arrival time
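  A tiny check of this in Python (random.expovariate takes the rate λ directly; the rate value is arbitrary):

    import random

    lam = 2.5                                     # arbitrary rate parameter
    samples = [random.expovariate(lam) for _ in range(100_000)]
    print(sum(samples) / len(samples), 1 / lam)   # sample mean of T is close to 1/λ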

◮ Arrival processes in the real world are often extremely well approximated by Poisson processes because arrivals are typically (i) independent and (ii) ignorant of previous arrivals and the state of the system they are arriving at

◮ All of the proofs in the notes are "should be able to do them" material

◮ If we merge two Poisson processes with rates λ1 and λ2, the merged process is Poisson with rate λ1 + λ2:
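  A sketch of why, via the first inter-arrival time of the merged stream, min(T1, T2):
    P(min(T1, T2) > t) = P(T1 > t) · P(T2 > t) = e^(−λ1 t) · e^(−λ2 t) = e^(−(λ1+λ2) t)
  i.e. an exponential tail again, now with rate λ1 + λ2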

◮ The supplementary proofs are a good read -- give them a look

◮ We’ll look at three commonly-used methods:

  • Inverse transform method
  • Acceptance-Rejection (AR) method
  • Convolution method

◮ The Inverse transform method:

  • Only works IF WE CAN INVERT THE FUNCTION (the CDF F)
  • Pick a random number U uniformly between 0 and 1
  • Basically equate F(x) to U and solve for x, parameterised by U (see the sketch below)
  • Careful with constraints when the solution is a polynomial root -- it could easily fall outside the valid range
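
  A minimal sketch for the exponential case, where F(x) = 1 − e^(−λx) inverts to x = −ln(1 − U)/λ:

    import math, random

    def sample_exponential(lam):
        """Inverse transform: set U = F(x) = 1 - exp(-lam * x) and solve for x."""
        u = random.random()                 # U uniform on [0, 1)
        return -math.log(1.0 - u) / lam     # x = -ln(1 - U) / lam

    samples = [sample_exponential(2.0) for _ in range(100_000)]
    print(sum(samples) / len(samples))      # should be close to 1/lam = 0.5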

◮ The Acceptance-Rejection (AR) method (sketch below):


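A standard sketch, using a hypothetical target density f(x) = 2x on [0, 1], a uniform proposal g, and envelope constant M = 2 so that f(x) ≤ M·g(x) everywhere:

  import random

  def sample_ar():
      """Acceptance-rejection for f(x) = 2x on [0, 1] with a uniform proposal and M = 2."""
      while True:
          x = random.random()        # candidate drawn from the proposal g = U(0, 1)
          u = random.random()        # independent uniform for the accept/reject test
          if u <= (2 * x) / 2:       # accept with probability f(x) / (M * g(x)) = x
              return x               # accepted: x is distributed according to f
          # otherwise reject the candidate and try again

  samples = [sample_ar() for _ in range(100_000)]
  print(sum(samples) / len(samples))   # the mean of f is 2/3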

  • CW tip: as and when we come to each event, we schedule the next access
  • The CW could be set up with a data structure holding one event for each item
  • Inter-arrival times are exponential, hence we can model the accesses as Poisson arrivals
  • Each cache item has its own independent arrival stream with its own rate parameter (see the sketch after this list):
    • Pick item i with the corresponding probability
    • Sampling can be done in O(i) with preprocessing
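  • A rough sketch of that per-item idea (the item rates are made up; only the scheduling structure matters):

    import heapq, random

    # Hypothetical per-item access rates: item i has its own Poisson stream with rate lam[i]
    lam = [5.0, 2.0, 1.0, 0.5]

    def simulate_accesses(until=100.0):
        # one pending "next access" event per item, kept in a diary ordered by time
        diary = [(random.expovariate(lam[i]), i) for i in range(len(lam))]
        heapq.heapify(diary)

        counts = [0] * len(lam)
        while diary:
            t, i = heapq.heappop(diary)
            if t > until:
                break
            counts[i] += 1      # handle the access to item i here
            heapq.heappush(diary, (t + random.expovariate(lam[i]), i))   # schedule item i's next access
        return counts

    print(simulate_accesses())   # counts come out roughly proportional to the rates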